Search Results
Search for: All records
Total Resources: 6
- Author / Contributor
- Hsieh, Cho-Jui (6)
- Chen, Patrick H (5)
- Li, Yang (2)
- Si, Si (2)
- Chang, Kai-Wei (1)
- Chelba, Ciprian (1)
- Chen, Patrick H. (1)
- Dai, Bo (1)
- Jiang, Jyun-Yu (1)
- Kumar, Sanjiv (1)
- Li, Liunian Harold (1)
- Ma, Yukun (1)
- Wang, Wei (1)
- Wei, Wei (1)
-
Ma, Yukun; Chen, Patrick H; Hsieh, Cho-Jui (EMNLP)
-
Jiang, Jyun-Yu; Chen, Patrick H; Hsieh, Cho-Jui; Wang, Wei (WWW)
-
Chen, Patrick H; Si, Si; Kumar, Sanjiv; Li, Yang; Hsieh, Cho-Jui (International Conference on Learning Representations (ICLR))
-
Li, Liunian Harold; Chen, Patrick H.; Hsieh, Cho-Jui; Chang, Kai-Wei (Transactions of the Association for Computational Linguistics)
Contextual representation models have achieved great success in improving various downstream natural language processing tasks. However, these language-model-based encoders are difficult to train due to their large parameter size and high computational complexity. By carefully examining the training procedure, we observe that the softmax layer, which predicts a distribution over the target word, often induces significant overhead, especially when the vocabulary size is large. Therefore, we revisit the design of the output layer and consider directly predicting the pre-trained embedding of the target word for a given context. When applied to ELMo, the proposed approach achieves a 4-fold speedup and eliminates 80% of the trainable parameters while achieving competitive performance on downstream tasks. Further analysis shows that the approach maintains the speed advantage under various settings, even when the sentence encoder is scaled up. (A code sketch of this embedding-prediction output layer follows the results list below.)
-
Chen, Patrick H; Si, Si; Li, Yang; Chelba, Ciprian; Hsieh, Cho-Jui (Advances in Neural Information Processing Systems)
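The embedding-prediction idea in the Li et al. entry above can be illustrated with a short sketch. This is a minimal illustration, not the paper's code: the module names (SoftmaxOutput, EmbeddingPredictionOutput), the cosine-distance loss, and all dimensions below are assumptions, since the abstract only states that the output layer directly predicts the pre-trained embedding of the target word instead of computing a softmax over the vocabulary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftmaxOutput(nn.Module):
    """Conventional output layer: project the context vector to V logits.

    The weight matrix is hidden_dim x vocab_size, so parameters and per-step
    cost grow with the vocabulary size V.
    """

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def loss(self, context: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
        return F.cross_entropy(self.proj(context), target_ids)


class EmbeddingPredictionOutput(nn.Module):
    """Continuous output layer: regress the context vector onto the fixed,
    pre-trained embedding of the target word (no vocabulary-sized softmax).
    """

    def __init__(self, hidden_dim: int, pretrained_emb: torch.Tensor):
        super().__init__()
        emb_dim = pretrained_emb.size(1)
        # Small projection whose size is independent of the vocabulary.
        self.proj = nn.Linear(hidden_dim, emb_dim)
        # Pre-trained embeddings are frozen targets, not trainable parameters.
        self.register_buffer("emb", F.normalize(pretrained_emb, dim=-1))

    def loss(self, context: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
        pred = F.normalize(self.proj(context), dim=-1)
        target = self.emb[target_ids]
        # Cosine distance between the prediction and the target word's embedding.
        return (1.0 - (pred * target).sum(dim=-1)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    hidden_dim, emb_dim, vocab_size, batch = 512, 300, 50_000, 8
    pretrained = torch.randn(vocab_size, emb_dim)   # stand-in for real pre-trained embeddings
    context = torch.randn(batch, hidden_dim)        # stand-in for encoder outputs
    target_ids = torch.randint(0, vocab_size, (batch,))
    print(SoftmaxOutput(hidden_dim, vocab_size).loss(context, target_ids).item())
    print(EmbeddingPredictionOutput(hidden_dim, pretrained).loss(context, target_ids).item())
```

Because the projection maps to the embedding dimension rather than to the vocabulary, the output layer's size and per-step cost no longer scale with the vocabulary, which is the source of the speedup and parameter reduction described in the abstract.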